
    Eyeriss v2: A Flexible Accelerator for Emerging Deep Neural Networks on Mobile Devices

    A recent trend in DNN development is to extend the reach of deep learning applications to platforms that are more resource- and energy-constrained, e.g., mobile devices. These endeavors aim to reduce the DNN model size and improve the hardware processing efficiency, and have resulted in DNNs that are much more compact in their structures and/or have high data sparsity. These compact or sparse models differ from traditional large ones in that their layer shapes and sizes vary much more widely, and they often require specialized hardware to exploit sparsity for performance improvement. Thus, many DNN accelerators designed for large DNNs do not perform well on these models. In this work, we present Eyeriss v2, a DNN accelerator architecture designed for running compact and sparse DNNs. To deal with the widely varying layer shapes and sizes, it introduces a highly flexible on-chip network, called hierarchical mesh, that can adapt to the different amounts of data reuse and bandwidth requirements of different data types, which improves the utilization of the computation resources. Furthermore, Eyeriss v2 can process sparse data directly in the compressed domain for both weights and activations, and is therefore able to improve both processing speed and energy efficiency with sparse models. Overall, with sparse MobileNet, Eyeriss v2 in a 65nm CMOS process achieves a throughput of 1470.6 inferences/sec and 2560.3 inferences/J at a batch size of 1, which is 12.6x faster and 2.5x more energy efficient than the original Eyeriss running MobileNet. We also present an analysis methodology called Eyexam that provides a systematic way of understanding the performance limits for DNN processors as a function of specific characteristics of the DNN model and accelerator design; it applies these characteristics as sequential steps to increasingly tighten the bound on the performance limits.
    Comment: Accepted for publication in IEEE Journal on Emerging and Selected Topics in Circuits and Systems. This extended version on arXiv also includes Eyexam in the appendix.
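    The abstract's claim that sparse weights and activations are processed "directly in the compressed domain" can be illustrated with a small sketch (my own simplification in plain Python, not the published processing-element design): both operands are kept as index/value pairs, and only positions where both are nonzero generate a multiply-accumulate, so zeros cost neither cycles nor data movement.

```python
# Minimal sketch of compressed-domain sparse processing (an illustrative
# simplification, not Eyeriss v2's actual PE logic): weights and activations
# are stored as (sorted index list, value list) pairs and never decompressed.

def sparse_dot(w_idx, w_val, a_idx, a_val):
    """Dot product of two sparse vectors given as (sorted indices, values)."""
    acc = 0.0
    i = j = 0
    while i < len(w_idx) and j < len(a_idx):
        if w_idx[i] == a_idx[j]:        # both operands nonzero: one MAC
            acc += w_val[i] * a_val[j]
            i += 1
            j += 1
        elif w_idx[i] < a_idx[j]:       # this weight pairs with a zero activation
            i += 1
        else:                           # this activation pairs with a zero weight
            j += 1
    return acc

# Only the matching nonzero positions (3 and 7) contribute any work.
print(sparse_dot([0, 3, 7], [2.0, -1.0, 4.0], [3, 5, 7], [10.0, 6.0, 0.5]))  # -8.0
```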

    FastDepth: Fast Monocular Depth Estimation on Embedded Systems

    Depth sensing is a critical function for robotic tasks such as localization, mapping, and obstacle detection. There has been a significant and growing interest in depth estimation from a single RGB image, due to the relatively low cost and size of monocular cameras. However, state-of-the-art single-view depth estimation algorithms are based on fairly complex deep neural networks that are too slow for real-time inference on an embedded platform, for instance, one mounted on a micro aerial vehicle. In this paper, we address the problem of fast depth estimation on embedded systems. We propose an efficient and lightweight encoder-decoder network architecture and apply network pruning to further reduce computational complexity and latency. In particular, we focus on the design of a low-latency decoder. Our methodology demonstrates that it is possible to achieve accuracy similar to prior work on depth estimation, but at inference speeds that are an order of magnitude faster. Our proposed network, FastDepth, runs at 178 fps on an NVIDIA Jetson TX2 GPU and at 27 fps when using only the TX2 CPU, with active power consumption under 10 W. FastDepth achieves close to state-of-the-art accuracy on the NYU Depth v2 dataset. To the best of the authors' knowledge, this paper demonstrates real-time monocular depth estimation using a deep neural network with the lowest latency and highest throughput on an embedded platform that can be carried by a micro aerial vehicle.
    Comment: Accepted for presentation at ICRA 2019. 8 pages, 6 figures, 7 tables.
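    As a rough illustration of the kind of lightweight encoder-decoder the abstract describes, here is an assumed PyTorch sketch built from depthwise-separable convolutions and nearest-neighbor upsampling; it is not the published FastDepth architecture, just a minimal network in the same spirit.

```python
# Hypothetical low-latency depth network: depthwise-separable convolutions in
# both encoder and decoder keep the multiply-add count and latency small.
import torch
import torch.nn as nn
import torch.nn.functional as F

def dw_separable(in_ch, out_ch, stride=1):
    """Depthwise 3x3 convolution followed by a pointwise 1x1 convolution."""
    return nn.Sequential(
        nn.Conv2d(in_ch, in_ch, 3, stride=stride, padding=1, groups=in_ch, bias=False),
        nn.BatchNorm2d(in_ch), nn.ReLU(inplace=True),
        nn.Conv2d(in_ch, out_ch, 1, bias=False),
        nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )

class TinyDepthNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: downsample the RGB input by 8x.
        self.enc = nn.Sequential(
            dw_separable(3, 32, stride=2),
            dw_separable(32, 64, stride=2),
            dw_separable(64, 128, stride=2),
        )
        # Decoder: cheap separable convs + nearest-neighbor upsampling.
        self.decs = nn.ModuleList([
            dw_separable(128, 64), dw_separable(64, 32), dw_separable(32, 16)])
        self.head = nn.Conv2d(16, 1, 1)   # single-channel depth map

    def forward(self, x):
        x = self.enc(x)
        for dec in self.decs:
            x = F.interpolate(dec(x), scale_factor=2, mode="nearest")
        return self.head(x)

depth = TinyDepthNet()(torch.randn(1, 3, 224, 224))   # -> shape (1, 1, 224, 224)
```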

    SegSort: Segmentation by Discriminative Sorting of Segments

    Almost all existing deep learning approaches for semantic segmentation tackle this task as a pixel-wise classification problem. Yet humans understand a scene not in terms of pixels, but by decomposing it into perceptual groups and structures that are the basic building blocks of recognition. This motivates us to propose an end-to-end pixel-wise metric learning approach that mimics this process. In our approach, the optimal visual representation determines the right segmentation within individual images and associates segments with the same semantic classes across images. The core visual learning problem is therefore to maximize the similarity within segments and minimize the similarity between segments. Given a model trained this way, inference is performed consistently by extracting pixel-wise embeddings and clustering, with the semantic label of each segment determined by the majority vote of its nearest neighbors from an annotated set. As a result, we present SegSort as a first attempt at using deep learning for unsupervised semantic segmentation, achieving 76% of the performance of its supervised counterpart. When supervision is available, SegSort shows consistent improvements over conventional approaches based on pixel-wise softmax training. Additionally, our approach produces more precise boundaries and consistent region predictions. The proposed SegSort further produces an interpretable result, as each choice of label can be easily understood from the retrieved nearest segments.
    Comment: In ICCV 2019. Webpage & Code: https://jyhjinghwang.github.io/projects/segsort.htm
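    The inference procedure the abstract describes (pixel-wise embeddings, clustering into segments, then a nearest-neighbor majority vote against an annotated set) can be sketched roughly as follows; this is a paraphrase under assumed data shapes, not the authors' released code.

```python
# Rough sketch of SegSort-style inference: cluster per-pixel embeddings into
# segments, then label each segment by majority vote over its nearest
# neighbors in an annotated bank of (unit-normalized) segment prototypes.
import numpy as np
from sklearn.cluster import KMeans

def segment_and_label(pix_emb, bank_emb, bank_labels, n_segments=8, k=5):
    """pix_emb: (H, W, D) embeddings; bank_emb: (N, D) annotated prototypes
    with labels bank_labels: (N,). Returns an (H, W) semantic label map."""
    H, W, D = pix_emb.shape
    flat = pix_emb.reshape(-1, D)
    flat = flat / np.linalg.norm(flat, axis=1, keepdims=True)  # unit-norm embeddings

    # 1) Decompose the image into segments by clustering pixel embeddings.
    seg_id = KMeans(n_clusters=n_segments, n_init=10).fit_predict(flat)

    labels = np.empty(H * W, dtype=bank_labels.dtype)
    for s in range(n_segments):
        proto = flat[seg_id == s].mean(axis=0)             # segment prototype
        sims = bank_emb @ proto                            # cosine similarities
        nn_labels = bank_labels[np.argsort(-sims)[:k]]     # k nearest annotated segments
        vals, counts = np.unique(nn_labels, return_counts=True)
        labels[seg_id == s] = vals[np.argmax(counts)]      # 2) majority vote
    return labels.reshape(H, W)
```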

    Prenatal diagnosis of a de novo 9p terminal chromosomal deletion in a fetus with major congenital anomalies

    Objective: We describe a prenatal ultrasonography diagnosis of omphalocele and symbrachydactyly in a fetus and review the literature on prenatal diagnosis of 9p terminal chromosomal deletions.
    Case report: A 31-year-old woman (gravida 3, para 1) was referred for genetic counseling because a fetal omphalocele had been detected. Prenatal ultrasonography at 17+ weeks of gestational age revealed a singleton female fetus with biometry equivalent to 18 weeks with an omphalocele. In addition, symbrachydactyly was also noted in the right arm; the wrist bones as well as the metacarpals were missing. A chromosomal study was arranged for a congenital anomaly involving omphalocele. We obtained Giemsa-banded chromosomes from fetal tissue cells, and an abnormal male karyotype with a terminal deletion of the short arm of chromosome 9 at band 9p13 was noted. After delivery, the fetus showed omphalocele, symbrachydactyly, trigonocephaly, sex reversal, a long philtrum, low-set ears, telecanthus, and a frontal prominence.
    Conclusion: Prenatal diagnosis of abnormal ultrasound findings with omphalocele and symbrachydactyly should include the differential diagnosis of a chromosome 9p deletion.

    Using Sidereal Rotation Period Expressions to Calculate the Sun’s Rotation Period through Observation of Sunspots

    We utilize sidereal rotation period expressions to calculate the Sun's rotation period via sunspot observation. From well-known astronomical sites, we collected sunspot diagrams for 14 months, from January 2013 to February 2014, for analysis, comparison, and statistical study. In addition to obtaining the average angular rate of sunspot motion, we found that even sunspots bearing the same group number moved at different angular rates, and that the lifetime of larger sunspots is generally longer than 10 days; these larger sunspots could therefore be followed as they rotated around to the far side of the Sun, whereas a handful of relatively smaller sunspots disappeared within a few days. The results show that the solar rotation period varies with latitude. However, if we take the average of the sunspots at high and low latitudes, we find that the calculated value is very close to the accepted values.
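    The core "sidereal rotation period expression" involved here is the standard correction for Earth's orbital motion: a sunspot's drift in longitude yields the synodic period, and 1/P_sidereal = 1/P_synodic + 1/P_orbit converts it to the sidereal period. A worked example follows; the drift rate below is an assumed illustrative value, not a measurement from the article.

```python
# Convert an observed sunspot drift rate to synodic and then sidereal rotation
# periods. The relation 1/P_sid = 1/P_syn + 1/P_orb holds because the Sun's
# rotation and Earth's orbit are in the same (prograde) sense.

EARTH_ORBITAL_PERIOD = 365.25                     # days

def synodic_period(drift_deg_per_day):
    """Synodic rotation period implied by a sunspot's longitude drift rate."""
    return 360.0 / drift_deg_per_day

def sidereal_period(p_synodic):
    """Correct the synodic period for Earth's own orbital motion."""
    return 1.0 / (1.0 / p_synodic + 1.0 / EARTH_ORBITAL_PERIOD)

# Example: a low-latitude spot drifting about 13.2 degrees per day.
p_syn = synodic_period(13.2)                      # ~27.3 days
print(round(p_syn, 1), round(sidereal_period(p_syn), 1))   # 27.3 25.4
```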

    Potassium {4-[(3S,6S,9S)-3,6-dibenzyl-9-isopropyl-4,7,10-trioxo-11-oxa-2,5,8-triazadodecyl]phenyl}trifluoroborate

    The reported compound 4 was synthesized and fully characterized by 1H NMR, 13C NMR, 11B NMR, 19F NMR, and high-resolution mass spectrometry.